The mitotic nuclei count is one of the important indicators for the pathological diagnosis of breast cancer. Manual annotation requires experienced pathologists and is time-consuming and inefficient. With the development of deep learning methods, models with good performance have emerged, but their generalization ability needs to be further strengthened. In this paper, we propose a two-stage mitosis segmentation and classification method, named SCMitosis. First, segmentation with a high recall rate is achieved through the proposed depthwise separable convolution residual block and channel-spatial attention gate. Then, a classification network is cascaded to further improve the detection performance for mitotic nuclei. The proposed model is verified on the ICPR 2012 dataset and obtains the highest F-score of 0.8687 compared with current state-of-the-art algorithms. In addition, the model also achieves good performance on the GZMH dataset, which was prepared by our group and will be released for the first time with the publication of this paper. The code will be available at: https://github.com/antifen/mitosis-nuclei-segmentation.
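The abstract does not spell out the block's configuration, but the parameter savings behind a depthwise separable convolution can be sketched by counting weights; the channel sizes below are illustrative, not the paper's.

```python
# Parameter-count comparison between a standard 3x3 convolution and a
# depthwise separable one (depthwise 3x3 followed by pointwise 1x1).
# Channel sizes are illustrative; the paper's block configuration is
# not specified in the abstract.

def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) + 1x1 pointwise conv."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

standard = conv_params(64, 64, 3)                   # 36864 weights
separable = depthwise_separable_params(64, 64, 3)   # 576 + 4096 = 4672 weights
print(standard, separable, round(standard / separable, 1))
```

For 64-to-64 channels the separable form needs roughly 8x fewer weights, which is why such blocks are a common choice for compact segmentation backbones.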
Learning continuous image representations has recently gained popularity for image super-resolution (SR) because of its ability to reconstruct high-resolution images at arbitrary scales from low-resolution inputs. Existing methods mostly ensemble nearby features to predict the new pixel at any queried coordinate in the SR image. Such a local ensemble suffers from several limitations: i) it has no learnable parameters and neglects the similarity of the visual features; ii) it has a limited receptive field and cannot ensemble relevant features over a large field of view; iii) it inherently has a gap with real camera imaging since it depends only on the coordinate. To address these issues, this paper proposes a continuous implicit attention-in-attention network, called CiaoSR. We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features. Furthermore, we embed a scale-aware attention in this implicit attention network to exploit additional non-local information. Extensive experiments on benchmark datasets demonstrate that CiaoSR significantly outperforms existing single-image super-resolution (SISR) methods with the same backbone. In addition, the proposed method also achieves state-of-the-art performance on the arbitrary-scale SR task. The effectiveness of the method is also demonstrated in the real-world SR setting. More importantly, CiaoSR can be flexibly integrated into any backbone to improve the SR performance.
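The core difference from a fixed local ensemble can be sketched as follows: instead of area-based interpolation weights, the feature at the queried coordinate is a softmax-weighted sum of nearby latent features, with weights derived from query-key similarity. The single-head design and shapes are illustrative assumptions, not CiaoSR's exact architecture.

```python
import numpy as np

# Attention-weighted local ensemble: the new pixel's feature is a
# learned-weight combination of the nearest latent codes, rather than a
# fixed bilinear-style average. Sizes are illustrative.

rng = np.random.default_rng(0)
d = 8                                    # latent feature dimension
neighbors = rng.normal(size=(4, d))      # features of the 4 nearest latent codes

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_ensemble(query, keys, values):
    """Ensemble values with weights from scaled dot-product similarity."""
    scores = keys @ query / np.sqrt(len(query))
    w = softmax(scores)                  # learned ensemble weights, sum to 1
    return w @ values, w

query = rng.normal(size=d)               # feature queried at the SR coordinate
feat, w = attention_ensemble(query, neighbors, neighbors)
print(feat.shape, round(float(w.sum()), 6))
```

Because the weights come from feature similarity rather than coordinates alone, visually similar neighbors contribute more, which addresses limitation i) above.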
Recent years have witnessed rapid development in NeRF-based image rendering due to its high quality. However, point cloud rendering is comparatively less explored. Compared with NeRF-based rendering, which suffers from dense spatial sampling, point cloud rendering is naturally less computation-intensive, which enables its deployment on mobile computing devices. In this work, we focus on boosting the image quality of point cloud rendering with a compact model design. We first analyze the adaptation of the volume rendering formulation to point clouds. Based on the analysis, we simplify the NeRF representation to a spatial mapping function which requires only a single evaluation per pixel. Further, motivated by ray marching, we rectify the noisy raw point clouds to the estimated intersections between rays and surfaces as queried coordinates, which avoids \textit{spatial frequency collapse} and neighbor-point disturbance. Composed of rasterization, spatial mapping, and refinement stages, our method achieves state-of-the-art performance on point cloud rendering, outperforming prior works by notable margins with a smaller model size. We obtain a PSNR of 31.74 on NeRF-Synthetic, 25.88 on ScanNet, and 30.81 on DTU. Code and data are publicly available at https://github.com/seanywang0408/RadianceMapping.
Successful material selection is critical to design automation for the design and manufacturing of products. Designers leverage their knowledge and experience to create high-quality designs by selecting the most appropriate materials through performance, manufacturability, and sustainability evaluation. Intelligent tools can help designers with varying expertise by providing recommendations learned from prior designs. To achieve this, we introduce a graph representation learning framework that supports the material prediction of bodies in assemblies. We formulate the material selection task as a node-level prediction task over the assembly graph representation of CAD models and process it using a graph neural network (GNN). Evaluation on three experimental protocols performed on the Fusion 360 Gallery dataset demonstrates the feasibility of our approach, achieving a 0.75 top-3 micro-F1 score. The proposed framework can scale to large datasets and incorporate designers' knowledge into the learning process. These capabilities allow the framework to serve as a recommendation system for design automation and as a baseline for future work, narrowing the gap between human designers and intelligent design agents.
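The node-level prediction setup can be sketched as follows: each body in an assembly is a graph node, edges connect related bodies, and one round of normalized message passing followed by a linear classifier scores candidate materials per node. The GCN-style propagation, sizes, and random weights are illustrative assumptions; the abstract does not specify the GNN variant.

```python
import numpy as np

# Node-level material prediction on an assembly graph: one GCN-style
# propagation step, then a per-node classifier over candidate materials.
# All weights are random stand-ins for trained parameters.

rng = np.random.default_rng(1)
n_nodes, d, n_materials = 4, 6, 3
x = rng.normal(size=(n_nodes, d))          # per-body input features
adj = np.array([[0, 1, 1, 0],              # assembly graph adjacency
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)

# Symmetric-normalized adjacency with self-loops (GCN-style propagation).
a_hat = adj + np.eye(n_nodes)
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
prop = d_inv_sqrt @ a_hat @ d_inv_sqrt

w1 = rng.normal(size=(d, d))               # message-passing weights
w2 = rng.normal(size=(d, n_materials))     # material classifier weights

h = np.maximum(prop @ x @ w1, 0.0)         # one GCN layer with ReLU
logits = h @ w2                            # node-level material scores
top3 = np.argsort(-logits, axis=1)[:, :3]  # top-3 candidates per body
print(logits.shape, top3.shape)
```

The top-3 ranking per node mirrors how the reported top-3 micro-F1 metric would be computed, and how a recommendation system would surface candidate materials to a designer.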
Transformers have been considered one of the most important deep learning models since 2018, in part because they have established state-of-the-art (SOTA) records and could potentially replace existing deep neural networks (DNNs). Despite these remarkable triumphs, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The variety of sequence lengths imposes additional computing overhead, as inputs need to be zero-padded to the maximum sentence length in the batch to accommodate parallel computing platforms. This paper targets field-programmable gate arrays (FPGAs) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator reduces the complexity of attention-based models to linear complexity and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill the pipeline slots and eliminates bubbles for NLP tasks. Experiments show that our design has a very small accuracy loss and achieves 80.2x and 2.6x speedup compared with CPU and GPU implementations, respectively, and is 4x faster than state-of-the-art GPU accelerators optimized via cuBLAS GEMM.
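One common way a sparse attention operator reaches linear complexity can be sketched as follows: each token attends only within a fixed-size local window, so cost grows as O(n·w) rather than O(n²). The abstract does not give the paper's actual sparsity pattern; the windowed scheme below is an illustrative stand-in.

```python
import numpy as np

# Windowed sparse attention: each query attends to at most w nearby keys,
# giving linear (in sequence length) rather than quadratic cost.
# Sizes are illustrative.

rng = np.random.default_rng(2)
n, d, w = 16, 8, 4                          # sequence length, dim, window size
q = rng.normal(size=(n, d))
k = rng.normal(size=(n, d))
v = rng.normal(size=(n, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

out = np.zeros_like(q)
for i in range(n):
    lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
    scores = k[lo:hi] @ q[i] / np.sqrt(d)   # only O(w) keys per query
    out[i] = softmax(scores) @ v[lo:hi]

print(out.shape)
```

On hardware, the fixed window also bounds the working set per query, which is what lets such operators cut off-chip memory traffic.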
Domain generalization person re-identification aims to apply a trained model to unseen domains. Prior works either combine the data from all training domains to capture domain-invariant features, or adopt a mixture of experts to investigate domain-specific information. In this work, we argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-ID models. To this end, we design a novel framework, which we name two-stream adaptive learning (TAL), to simultaneously model these two kinds of information. Specifically, a domain-specific stream is proposed to capture training-domain statistics with batch normalization (BN) parameters, while an adaptive matching layer is designed to dynamically aggregate domain-level information. In the meantime, we design an adaptive BN layer in the domain-invariant stream to approximate the statistics of various unseen domains. These two streams work adaptively and collaboratively to learn more generalizable re-ID features. Our framework can be applied to both single-source and multi-source domain generalization tasks, and experimental results show that it significantly outperforms state-of-the-art methods.
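The adaptive-BN idea can be sketched as follows: statistics for an unseen domain are approximated by a convex combination of the per-training-domain BN statistics. The uniform mixing weights and all values below are illustrative assumptions; in TAL the weights would be produced adaptively by the network.

```python
import numpy as np

# Approximating unseen-domain BN statistics as a weighted combination of
# statistics stored for each training domain, then normalizing with them.
# All statistics and weights here are illustrative stand-ins.

rng = np.random.default_rng(3)
feats = rng.normal(size=(32, 16))                 # features from an unseen domain

# Per-training-domain BN statistics (illustrative values).
domain_means = [np.full(16, 0.1), np.full(16, -0.2), np.full(16, 0.4)]
domain_vars  = [np.full(16, 1.0), np.full(16, 0.8), np.full(16, 1.3)]

weights = np.ones(3) / 3                          # mixing weights (uniform here)
mu  = sum(w * m for w, m in zip(weights, domain_means))
var = sum(w * v for w, v in zip(weights, domain_vars))

normalized = (feats - mu) / np.sqrt(var + 1e-5)   # BN with aggregated statistics
print(normalized.shape)
```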
A fast and accurate panoptic segmentation system for LiDAR point clouds is crucial for autonomous driving vehicles to understand the surrounding objects and scenes. Existing approaches usually rely on proposals or clustering to segment foreground instances. As a result, they struggle to achieve real-time performance. In this paper, we propose a novel real-time end-to-end panoptic segmentation network for LiDAR point clouds, called CPSeg. In particular, CPSeg comprises a shared encoder, a dual decoder, a task-aware attention module (TAM), and a cluster-free instance segmentation head. The TAM is designed to enforce the two decoders to learn rich task-aware features for semantic and instance embeddings. Moreover, CPSeg incorporates a new cluster-free instance segmentation head to dynamically pillarize foreground points according to the learned embeddings. It then acquires instance labels by finding connected pillars through pairwise embedding comparison. Thus, the conventional proposal-based or clustering-based instance segmentation is transformed into a binary segmentation problem on the pairwise embedding comparison matrix. To help the network regress instance embeddings, a fast and deterministic depth completion algorithm is proposed to calculate the surface normal of each point cloud in real time. The proposed method is benchmarked on two large-scale autonomous driving datasets, namely SemanticKITTI and nuScenes. Notably, extensive experimental results show that CPSeg achieves state-of-the-art results among real-time approaches on both datasets.
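The cluster-free grouping step can be sketched as follows: pillars whose embeddings are close are marked "connected" in a pairwise comparison matrix, and instance labels fall out of that matrix's connected components. The embedding values, dimensionality, and distance threshold below are illustrative assumptions.

```python
import numpy as np

# Instance grouping as binary segmentation of a pairwise embedding-comparison
# matrix: thresholded distances give a connectivity matrix, and connected
# components of that matrix are the instances. Values are illustrative.

emb = np.array([[0.0, 0.0],     # pillar embeddings (2-D for illustration)
                [0.1, 0.0],
                [5.0, 5.0],
                [5.1, 4.9]])
thresh = 1.0

# Pairwise comparison matrix: True where two pillars belong together.
dist = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
connected = dist < thresh

def components(adj):
    """Label connected components of a boolean adjacency matrix (DFS)."""
    n = len(adj)
    labels, cur = -np.ones(n, dtype=int), 0
    for s in range(n):
        if labels[s] >= 0:
            continue
        stack = [s]
        while stack:
            i = stack.pop()
            if labels[i] >= 0:
                continue
            labels[i] = cur
            stack.extend(j for j in range(n) if adj[i, j] and labels[j] < 0)
        cur += 1
    return labels

labels = components(connected)
print(labels)   # two instances: [0 0 1 1]
```

Because the grouping reduces to thresholding and component labeling, it avoids the iterative clustering or proposal generation that makes other pipelines hard to run in real time.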
For video recognition tasks, a global representation summarizing the whole content of a video snippet plays an important role in the final performance. However, existing video architectures usually generate it using a simple global average pooling (GAP) method, which has limited ability to capture the complex dynamics of videos. For image recognition tasks, there exists evidence that covariance pooling has stronger representation ability than GAP. Unfortunately, such plain covariance pooling used in image recognition is an orderless representation, which cannot model the spatio-temporal structure inherent in videos. Therefore, this paper proposes a Temporal-attentive Covariance Pooling (TCP), inserted at the end of deep architectures, to produce powerful video representations. Specifically, our TCP first develops a temporal attention module to adaptively calibrate spatio-temporal features for the subsequent covariance pooling, approximately producing attentive covariance representations. Then, temporal covariance pooling performs temporal pooling of the attentive covariance representations to characterize both intra-frame correlations and inter-frame cross-correlations of the calibrated features. As such, the proposed TCP can capture complex temporal dynamics. Finally, a fast matrix power normalization is introduced to exploit the geometry of covariance representations. Note that our TCP is model-agnostic and can be flexibly integrated into any video architecture, resulting in TCPNet for effective video recognition. Extensive experiments on six benchmarks (e.g., Kinetics, Something-Something, and Charades) using various video architectures show that our TCPNet clearly outperforms its counterparts, while having strong generalization ability. The source code is publicly available.
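The gap between GAP and covariance pooling can be sketched as follows: GAP reduces per-frame features to a single mean vector, while covariance pooling keeps a d x d matrix of pairwise channel correlations. The temporal attention and temporal pooling steps that TCP adds on top are omitted here; sizes are illustrative.

```python
import numpy as np

# Plain covariance pooling over a clip versus GAP: the covariance matrix
# captures second-order channel statistics that the mean vector discards.
# Sizes are illustrative.

rng = np.random.default_rng(4)
t, d = 8, 16                           # frames, feature dimension
feats = rng.normal(size=(t, d))        # one feature vector per frame

gap = feats.mean(axis=0)               # global average pooling: shape (d,)

centered = feats - gap
cov = centered.T @ centered / (t - 1)  # covariance pooling: shape (d, d)

print(gap.shape, cov.shape)
```

The richer d x d output is what gives covariance pooling its stronger representation ability, at the cost of a larger descriptor, which motivates the normalization and attention machinery described above.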
Given a single chair image, can we extract its 3D shape and animate it with plausible articulations and motions? This is an interesting new problem that may have many downstream augmented reality and virtual reality applications. In this paper, we propose an automated approach to tackle the entire process of reconstructing such a generic 3D object from a single image, rigging it, and animating it. Compared with previous efforts on object manipulation, our work goes beyond 2D manipulation. Moreover, we endow otherwise rigid objects such as chairs with plausible human-like or animal-like deformations; this leads to greater flexibility in terms of feasible object motions. Empirically, our method demonstrates satisfactory performance on public datasets as well as our in-house dataset; compared with the related tasks of 3D reconstruction and skeleton prediction, our results surpass the state of the art by clear margins. Our implementation and dataset will be made publicly available upon paper acceptance.
Recent works on domain adaptation reveal the effectiveness of adversarial learning in closing the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake under the guidance of hard labels, while it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to the source and target domains. Domain mixup is jointly conducted at the pixel and feature level to improve the robustness of models. Extensive experiments prove that the proposed approach achieves superior performance on tasks with various degrees of domain shift and data complexity.
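The pixel-level mixup with soft labels can be sketched as follows: a mixed sample is a convex combination of a source and a target image, and the discriminator is given the mixing ratio as a soft score instead of a hard real/fake label. Image sizes and the Beta-distribution parameters are illustrative assumptions.

```python
import numpy as np

# Pixel-level domain mixup: interpolate a source and a target image with
# ratio lam, and use lam itself as the soft domain label the discriminator
# should regress. Sizes and Beta parameters are illustrative.

rng = np.random.default_rng(5)
src = rng.uniform(size=(8, 8, 3))       # a source-domain image
tgt = rng.uniform(size=(8, 8, 3))       # a target-domain image

lam = rng.beta(2.0, 2.0)                # mixing ratio in (0, 1)
mixed = lam * src + (1 - lam) * tgt     # pixel-level mixup
soft_label = lam                        # soft score instead of hard 0/1

print(mixed.shape, round(float(soft_label), 3))
```

Training on such intermediate samples populates the latent space between the two domains, which is what drives the more continuous domain invariance claimed above.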